Transformer verification draws increasing attention in machine learning research and industry. It formally verifies the robustness of transformers against adversarial attacks, such as exchanging words with synonyms. However, the performance of transformer verification is still not satisfactory due to bound-centric computation, which differs significantly from standard neural networks. In this paper, we propose Faith, an efficient framework for transformer verification on GPUs. We first propose a semantic-aware computation graph transformation to identify semantic information, such as bound computation, in transformer verification. We exploit such semantic information to enable efficient kernel fusion at the computation graph level. Second, we propose a verification-specialized kernel crafter to efficiently map transformer verification to modern GPUs. The crafter exploits a set of GPU hardware supports to accelerate verification-specialized operations, which are usually memory-intensive. Third, we propose an expert-guided autotuning approach that incorporates expert knowledge of GPU backends to facilitate exploration of the large search space. Extensive evaluations show that Faith achieves a $2.1\times$ to $3.4\times$ ($2.6\times$ on average) speedup over state-of-the-art frameworks.
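Faith's core workload is this bound-centric computation: each layer propagates a pair of lower/upper bounds rather than a single activation. As a minimal illustration of the kind of operation such verifiers fuse into GPU kernels, here is an interval bound propagation step through a linear layer (a generic NumPy sketch; `linear_interval_bounds` is a hypothetical helper, not Faith's API):

```python
import numpy as np

def linear_interval_bounds(l, u, W, b):
    """Propagate elementwise interval bounds [l, u] through y = W @ x + b.

    Splitting W into positive and negative parts gives tight bounds:
    the lower bound pairs W+ with l and W- with u, and vice versa.
    """
    W_pos = np.maximum(W, 0.0)
    W_neg = np.minimum(W, 0.0)
    y_l = W_pos @ l + W_neg @ u + b
    y_u = W_pos @ u + W_neg @ l + b
    return y_l, y_u

# Toy usage: bounds on a 2-unit layer under an L-infinity perturbation.
x = np.array([0.5, -0.2, 0.1])
eps = 0.05
W = np.array([[1.0, -2.0, 0.5], [0.3, 0.8, -1.2]])
b = np.zeros(2)
lo, hi = linear_interval_bounds(x - eps, x + eps, W, b)
```

Because each output needs both the positive and negative parts of the weight matrix, the operation reads roughly twice the data of a plain matmul, which is consistent with the abstract's observation that these verification-specialized operations tend to be memory-intensive.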
LiDAR-based 3D object detection, semantic segmentation, and panoptic segmentation are usually implemented in specialized networks with distinctive architectures that are difficult to adapt to one another. This paper presents LidarMultiNet, a LiDAR-based multi-task network that unifies these three major LiDAR perception tasks. Among its many benefits, a multi-task network can reduce the overall cost by sharing weights and computation among multiple tasks. However, it typically underperforms compared to independently combined single-task models. The proposed LidarMultiNet aims to bridge the performance gap between the multi-task network and multiple single-task networks. At the core of LidarMultiNet is a strong 3D voxel-based encoder-decoder architecture with a Global Context Pooling (GCP) module that extracts global contextual features from a LiDAR frame. Task-specific heads are added on top of the network to perform the three LiDAR perception tasks. More tasks can be implemented simply by adding new task-specific heads while introducing little additional cost. A second stage is also proposed to refine the first-stage segmentation and generate accurate panoptic segmentation results. LidarMultiNet has been extensively tested on both the Waymo Open Dataset and the nuScenes dataset, demonstrating for the first time that the major LiDAR perception tasks can be unified in a single strong network that is trained end-to-end and achieves state-of-the-art performance. Notably, LidarMultiNet reached the highest mIoU in the Waymo Open Dataset 3D semantic segmentation challenge 2022, with the best accuracy for most of the 22 classes on the test set, using only LiDAR points as input. It also sets a new state-of-the-art for a single model on the Waymo 3D object detection benchmark and three nuScenes benchmarks.
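The abstract does not spell out how GCP is implemented, but the idea of pooling scene-level context and feeding it back to every location can be sketched in a few lines. Below is a minimal, hypothetical PyTorch module in that spirit (the class name and layer choices are illustrative, not the authors' code):

```python
import torch
import torch.nn as nn

class GlobalContextPooling(nn.Module):
    """Illustrative sketch of a GCP-style module (not the released code).

    Pools a dense feature map into a single global descriptor and
    broadcasts it back, so every location sees scene-level context.
    """
    def __init__(self, channels: int):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(channels, channels), nn.ReLU(inplace=True))
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) dense (e.g. BEV) feature map.
        g = x.mean(dim=(2, 3))                # global average pool -> (B, C)
        g = self.proj(g)[:, :, None, None]    # (B, C, 1, 1)
        g = g.expand_as(x)                    # broadcast to every location
        return self.fuse(torch.cat([x, g], dim=1))
```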
This technical report presents the 1st place winning solution for the Waymo Open Dataset 3D semantic segmentation challenge 2022. Our network, termed LidarMultiNet, unifies the major LiDAR perception tasks of 3D semantic segmentation, object detection, and panoptic segmentation in a single framework. At the core of LidarMultiNet is a strong 3D voxel-based encoder-decoder network with a novel Global Context Pooling (GCP) module that extracts global contextual features from a LiDAR frame to complement its local features. An optional second stage is proposed to refine the first-stage segmentation or generate accurate panoptic segmentation results. Our solution achieves a mIoU of 71.13 and is the best for most of the 22 classes on the Waymo 3D semantic segmentation test set, outperforming all other 3D semantic segmentation methods on the official leaderboard. We demonstrate for the first time that major LiDAR perception tasks can be unified in a single strong network that can be trained end-to-end.
Active speaker detection and speech enhancement have become increasingly attractive topics in audio-visual scenarios. Independently designed architecture schemes, tailored to the characteristics of each task, have been widely used. This can cause the representations learned by a model to be task-specific and inevitably leads to a lack of generalization ability in features based on multi-modal modeling. Recent studies have shown that establishing cross-modal relationships between the auditory and visual streams is a promising solution to the challenge of audio-visual multi-task learning. Therefore, motivated by bridging the multi-modal associations in audio-visual tasks, this study proposes a unified framework that achieves target speaker detection and speech enhancement by jointly learning an audio-visual model.
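As a rough illustration of such a unified framework, the sketch below pairs an audio encoder and a visual encoder, concatenates their streams into a shared cross-modal representation, and attaches one head per task. All names and dimensions are hypothetical; the study's actual architecture may differ substantially:

```python
import torch
import torch.nn as nn

class JointAVModel(nn.Module):
    """Hypothetical sketch of a unified audio-visual model: one fused
    representation drives both a speaker-detection head and a speech-
    enhancement head, so the two tasks share cross-modal features."""
    def __init__(self, a_dim=64, v_dim=128, hid=256, n_freq=257):
        super().__init__()
        self.audio_enc = nn.GRU(a_dim, hid, batch_first=True)
        self.visual_enc = nn.GRU(v_dim, hid, batch_first=True)
        self.detect_head = nn.Linear(2 * hid, 1)        # active-speaker logit
        self.enhance_head = nn.Linear(2 * hid, n_freq)  # spectral mask

    def forward(self, audio, video):
        a, _ = self.audio_enc(audio)    # (B, T, hid)
        v, _ = self.visual_enc(video)   # (B, T, hid)
        f = torch.cat([a, v], dim=-1)   # shared cross-modal features
        return self.detect_head(f), torch.sigmoid(self.enhance_head(f))
```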
Variational quantum algorithms are expected to demonstrate the advantage of quantum computing on near-term noisy quantum computers. However, training such variational quantum algorithms suffers from gradient vanishing as the size of the algorithm increases. Previous work cannot handle the gradient vanishing induced by the inevitable noise effects of realistic quantum hardware. In this paper, we propose a novel training scheme to mitigate such noise-induced gradient vanishing. We first introduce a new cost function in which the gradients are significantly augmented by employing traceless observables in a truncated subspace. We then prove that the same minimum can be reached by optimizing the original cost function with the gradients from the new cost function. Experiments show that our new training scheme is highly effective for major variational quantum algorithms on various tasks.
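The classical-optimization side of this scheme is easy to isolate: descend using the gradient of the surrogate (gradient-augmented) cost while the quantity being optimized is the original cost, relying on the proof that the two share a minimum. The toy sketch below illustrates only this control flow with scalar stand-ins, not actual quantum circuits:

```python
import numpy as np

def train_with_surrogate_grads(theta, grad_new, cost_orig,
                               lr=0.1, steps=200):
    """Sketch of the scheme's control flow: take descent steps using the
    surrogate cost's gradients while tracking the original objective,
    which (per the paper's proof) shares the same minimum."""
    history = []
    for _ in range(steps):
        theta = theta - lr * grad_new(theta)   # surrogate gradient step
        history.append(cost_orig(theta))       # monitor original objective
    return theta, history

# Toy stand-ins: the surrogate amplifies an otherwise vanishing gradient.
cost_orig = lambda t: 1 - 1e-3 * np.cos(t)   # gradients of order 1e-3
grad_new = lambda t: np.sin(t)               # amplified gradient, same minimum
theta, hist = train_with_surrogate_grads(np.array(1.0), grad_new, cost_orig)
```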
Driven by improved architectures and better representation learning frameworks, the field of visual recognition has enjoyed rapid modernization and performance boost in the early 2020s. For example, modern ConvNets, represented by ConvNeXt, have demonstrated strong performance in various scenarios. While these models were originally designed for supervised learning with ImageNet labels, they can also potentially benefit from self-supervised learning techniques such as masked autoencoders (MAE). However, we found that simply combining these two approaches leads to subpar performance. In this paper, we propose a fully convolutional masked autoencoder framework and a new Global Response Normalization (GRN) layer that can be added to the ConvNeXt architecture to enhance inter-channel feature competition. This co-design of self-supervised learning techniques and architectural improvement results in a new model family called ConvNeXt V2, which significantly improves the performance of pure ConvNets on various recognition benchmarks, including ImageNet classification, COCO detection, and ADE20K segmentation. We also provide pre-trained ConvNeXt V2 models of various sizes, ranging from an efficient 3.7M-parameter Atto model with 76.7% top-1 accuracy on ImageNet, to a 650M Huge model that achieves a state-of-the-art 88.9% accuracy using only public training data.
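The GRN layer is simple enough to state precisely: aggregate a per-channel global response, normalize it across channels, and use the result to recalibrate the features through a residual connection. Below is a sketch following the paper's description (unofficial; a channels-last layout matching the ConvNeXt block is assumed):

```python
import torch
import torch.nn as nn

class GRN(nn.Module):
    """Global Response Normalization as described in ConvNeXt V2
    (an unofficial sketch). Input is channels-last: (N, H, W, C)."""
    def __init__(self, dim: int):
        super().__init__()
        self.gamma = nn.Parameter(torch.zeros(1, 1, 1, dim))
        self.beta = nn.Parameter(torch.zeros(1, 1, 1, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Global feature aggregation: per-channel L2 norm over space.
        gx = torch.norm(x, p=2, dim=(1, 2), keepdim=True)   # (N, 1, 1, C)
        # Divisive normalization across channels promotes competition.
        nx = gx / (gx.mean(dim=-1, keepdim=True) + 1e-6)
        # Learnable calibration plus a residual connection.
        return self.gamma * (x * nx) + self.beta + x
```

Initializing gamma and beta to zero makes the layer start as an identity mapping, so it can be dropped into a pretrained-style block without disturbing it at the beginning of training.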
A step-search sequential quadratic programming method is proposed for solving nonlinear equality constrained stochastic optimization problems. It is assumed that constraint function values and derivatives are available, but only stochastic approximations of the objective function and its associated derivatives can be computed via inexact probabilistic zeroth- and first-order oracles. Under reasonable assumptions, a high-probability bound on the iteration complexity of the algorithm to approximate first-order stationarity is derived. Numerical results on standard nonlinear optimization test problems illustrate the advantages and limitations of our proposed method.
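The abstract does not state the subproblem, but SQP methods of this type typically compute the search direction $d$ at iterate $x_k$ from a quadratic model subject to linearized constraints, e.g.

$$\min_{d \in \mathbb{R}^n} \; \bar{g}_k^{\top} d + \tfrac{1}{2}\, d^{\top} H_k d \quad \text{s.t.} \quad c(x_k) + \nabla c(x_k)^{\top} d = 0,$$

where $\bar{g}_k$ is the stochastic gradient estimate supplied by the first-order oracle and $H_k$ is a positive-definite model Hessian; the step-search then selects the step size along $d$ using the inexact zeroth-order function estimates.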
Masked image modeling (MIM) has shown great promise for self-supervised learning (SSL), yet it has been criticized for learning inefficiency. We believe this stems from insufficient utilization of the training signals. To alleviate this issue, we introduce a conceptually simple yet learning-efficient MIM training scheme, termed Disjoint Masking with Joint Distillation (DMJD). For disjoint masking (DM), we sequentially sample multiple masked views per image in a mini-batch under a disjointness constraint, raising the number of tokens used for reconstruction in each image while keeping the masking rate of each view. For joint distillation (JD), we adopt a dual-branch architecture to predict invisible (masked) and visible (unmasked) tokens, respectively, with superior learning targets. Rooted in orthogonal perspectives on training-efficiency improvement, DM and JD cooperatively accelerate training convergence without sacrificing the model's generalization ability. Concretely, DM can train a ViT in half of the effective training epochs (3.7 times less time-consuming) while reporting competitive performance. With JD, our DMJD clearly improves the linear-probing classification accuracy over ConvMAE by 5.8%. On fine-grained downstream tasks such as semantic segmentation and object detection, our DMJD also presents superior generalization compared with state-of-the-art SSL methods. The code and models will be made public at https://github.com/mx-mark/DMJD.
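The disjoint-masking step can be made concrete with a small sketch: permute the token indices once, then carve out non-overlapping masked sets of equal size for each view (illustrative code, not the released implementation; the view count and mask ratio here are examples):

```python
import numpy as np

def disjoint_masks(num_tokens, num_views, mask_ratio, rng=None):
    """Sample `num_views` masked views whose masked sets are disjoint,
    so reconstruction targets cover more tokens per image while each
    view keeps the same mask ratio (a sketch of DM)."""
    rng = rng or np.random.default_rng()
    n_masked = int(num_tokens * mask_ratio)
    assert num_views * n_masked <= num_tokens, "views must fit disjointly"
    perm = rng.permutation(num_tokens)
    masks = np.zeros((num_views, num_tokens), dtype=bool)
    for v in range(num_views):
        masks[v, perm[v * n_masked:(v + 1) * n_masked]] = True
    return masks

# e.g. 196 ViT patches, 2 views at 50% masking: together they cover all tokens.
m = disjoint_masks(196, 2, 0.5)
```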
Considering the computation complexity, we propose a Guided Hybrid Quantization with One-to-one Self-Teaching (GHOST) framework. More concretely, we first design a structure called guided quantization self-distillation (GQSD), an innovative idea for realizing lightweight models through the synergy of quantization and distillation. The training process of the quantization model is guided by its full-precision counterpart, which saves time and cost without requiring a huge pre-trained model in advance. Second, we put forward a hybrid quantization (HQ) module to obtain the optimal bit width automatically under a constrained condition, where a threshold on the distribution distance between the center and samples is applied in the weight-value search space. Third, in order to improve information transfer, we propose a one-to-one self-teaching (OST) module to give the student network the ability of self-judgment. A switch control machine (SCM) builds a bridge between the student network and the teacher network at the same location to help the teacher reduce wrong guidance and impart vital knowledge to the student. This distillation method allows a model to learn from itself and gain substantial improvement without any additional supervision. Extensive experiments on a multimodal dataset (VEDAI) and single-modality datasets (DOTA, NWPU, and DIOR) show that object detection based on GHOST outperforms existing detectors. Its tiny parameter size (<9.7 MB) and Bit-Operations (BOPs) (<2158 G), compared with any remote-sensing, lightweight, or distillation-based algorithm, demonstrate its superiority in the lightweight design domain. Our code and models will be released at https://github.com/icey-zhang/GHOST.
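As a rough illustration of the quantization side, the sketch below shows a uniform symmetric quantizer plus a toy bit-width selector that keeps the quantization error under a threshold. GHOST's actual HQ module uses a distribution-distance criterion over the weight-value search space, so treat the names and the selection rule here as hypothetical stand-ins:

```python
import numpy as np

def quantize_uniform(w, bits):
    """Uniform symmetric quantization of weights to `bits` bits
    (an illustrative primitive, not GHOST's HQ module)."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(w).max() / qmax if w.size else 1.0
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return q * scale  # dequantized weights used in the forward pass

def pick_bit_width(w, candidates=(2, 4, 8), tol=1e-2):
    """Toy stand-in for hybrid bit-width selection: choose the smallest
    bit width whose mean squared quantization error stays under `tol`."""
    for b in sorted(candidates):
        if np.mean((w - quantize_uniform(w, b)) ** 2) < tol:
            return b
    return max(candidates)

w = np.random.randn(256) * 0.1
best = pick_bit_width(w)
```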